Prevent LoadBalancer updates on follower. #6872

Draft
wants to merge 1 commit into main from lbstatus-on-follower

Conversation

@tsaarni (Member) commented Jan 20, 2025

This PR fixes a memory leak triggered by LoadBalancer status updates. Only the leader instance runs loadBalancerStatusWriter, so on followers nothing reads from the channel that receives status updates. Followers still watch LoadBalancer status updates and send them to the channel, causing the goroutine that calls ServiceStatusLoadBalancerWatcher.notify() to block. As a result, LoadBalancerStatus updates piled up and consumed memory, eventually causing an out-of-memory condition that killed the Contour process.

Fixes #6860
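
For illustration, here is a minimal sketch of the guarded notify. Only the LBStatus channel and the notify signature come from the existing code; the leader field and how it gets set are simplified assumptions, not the exact diff.

```go
package k8s

import (
	"sync/atomic"

	core_v1 "k8s.io/api/core/v1"
)

// Illustrative sketch: the watcher forwards status updates only while this
// instance is the leader, since only the leader runs loadBalancerStatusWriter
// and drains the LBStatus channel.
type ServiceStatusLoadBalancerWatcher struct {
	LBStatus chan core_v1.LoadBalancerStatus
	leader   atomic.Bool // assumed to be set to true when leadership is acquired
}

func (s *ServiceStatusLoadBalancerWatcher) notify(lbstatus core_v1.LoadBalancerStatus) {
	// On a follower nothing reads from LBStatus, so an unconditional send
	// would block this goroutine and let pending updates pile up in memory.
	if !s.leader.Load() {
		return
	}
	s.LBStatus <- lbstatus
}
```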

@tsaarni tsaarni requested a review from a team as a code owner January 20, 2025 18:20
@tsaarni tsaarni requested review from skriss and sunjayBhatia and removed request for a team January 20, 2025 18:20
@sunjayBhatia sunjayBhatia requested review from a team, davinci26 and izturn and removed request for a team January 20, 2025 18:20
Follower instances of Contour do not run loadBalancerStatusWriter and
therefore do not read from the channel that receives status updates.
LoadBalancer status updates are still watched and sent to the channel,
causing the goroutine that sends them to block.

This led to LoadBalancer updates piling up and consuming memory, eventually
causing an out-of-memory condition and killing the Contour process.

Signed-off-by: Tero Saarni <[email protected]>
@tsaarni tsaarni force-pushed the lbstatus-on-follower branch from 8b2af7d to 920d0ce on January 20, 2025 18:22
@tsaarni tsaarni added the release-note/small A small change that needs one line of explanation in the release notes. label Jan 20, 2025

codecov bot commented Jan 20, 2025

Codecov Report

Attention: Patch coverage is 45.45455% with 6 lines in your changes missing coverage. Please review.

Project coverage is 81.04%. Comparing base (32dad5f) to head (920d0ce).
Report is 19 commits behind head on main.

Files with missing lines | Patch % | Lines
cmd/contour/serve.go | 0.00% | 6 Missing ⚠️
Additional details and impacted files

@@            Coverage Diff             @@
##             main    #6872      +/-   ##
==========================================
- Coverage   81.05%   81.04%   -0.02%     
==========================================
  Files         133      133              
  Lines       20026    20034       +8     
==========================================
+ Hits        16232    16236       +4     
- Misses       3500     3504       +4     
  Partials      294      294              
Files with missing lines | Coverage Δ
internal/k8s/statusaddress.go | 83.05% <100.00%> (+0.39%) ⬆️
cmd/contour/serve.go | 21.65% <0.00%> (-0.10%) ⬇️

 func (s *ServiceStatusLoadBalancerWatcher) notify(lbstatus core_v1.LoadBalancerStatus) {
-	s.LBStatus <- lbstatus
+	if s.leader.Load() {


Is it possible to lose the envoy service event in this way?

@tsaarni (Member, Author) commented Jan 21, 2025


True. Currently the event processing logic relies on Added/Updated/Deleted events alone. That works for other resource types because they are processed by all Contour instances, but I think a new approach is needed for the load balancer status updates.


Maybe we can handle it simply: notify with the latest envoy service status when the instance becomes leader.
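
A rough sketch of that idea, assuming a hypothetical lastStatus cache and a hypothetical OnElectedLeader hook wired to Contour's leader election; none of these names are existing Contour API.

```go
package k8s

import (
	"sync"
	"sync/atomic"

	core_v1 "k8s.io/api/core/v1"
)

// Illustrative variant: remember the most recent status while following and
// replay it once this instance becomes the leader.
type ServiceStatusLoadBalancerWatcher struct {
	LBStatus   chan core_v1.LoadBalancerStatus
	leader     atomic.Bool
	mu         sync.Mutex
	lastStatus *core_v1.LoadBalancerStatus // hypothetical cache of the latest update
}

func (s *ServiceStatusLoadBalancerWatcher) notify(lbstatus core_v1.LoadBalancerStatus) {
	s.mu.Lock()
	s.lastStatus = &lbstatus
	s.mu.Unlock()

	if s.leader.Load() {
		s.LBStatus <- lbstatus
	}
}

// OnElectedLeader would be wired to the leader-election callback.
func (s *ServiceStatusLoadBalancerWatcher) OnElectedLeader() {
	s.leader.Store(true)

	s.mu.Lock()
	last := s.lastStatus
	s.mu.Unlock()

	if last != nil {
		// Replay the latest envoy service status observed while following,
		// so the new leader does not miss an event it never forwarded.
		s.LBStatus <- *last
	}
}
```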

@tsaarni tsaarni marked this pull request as draft January 21, 2025 18:21

The Contour project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 14d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, the PR is closed

You can:

  • Ensure your PR is passing all CI checks. PRs that are fully green are more likely to be reviewed. If you are having trouble with CI checks, reach out to the #contour channel in the Kubernetes Slack workspace.
  • Mark this PR as fresh by commenting or pushing a commit
  • Close this PR
  • Offer to help out with triage

Please send feedback to the #contour channel in the Kubernetes Slack

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 10, 2025
Labels
  • lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.
  • release-note/small A small change that needs one line of explanation in the release notes.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Non-leader contour controller pod memory keeps increasing until OOM
2 participants